As a challenging task, text-to-image generation aims to generate photo-realistic and semantically consistent images according to the given text descriptions. Existing methods mainly extract the text information from only one sentence to represent an image, and this text representation strongly affects the quality of the generated images. However, directly utilizing the limited information in a single sentence misses some key attribute descriptions, which are crucial for describing an image accurately. To alleviate the above problem, we propose an effective text representation method that complements the sentence with attribute information. Firstly, we construct an attribute memory to jointly control the text-to-image generation together with the sentence input. Secondly, we explore two update mechanisms, a sample-aware and a sample-joint mechanism, to dynamically optimize a generalized attribute memory. Furthermore, we design an attribute-sentence joint conditional generator learning scheme to align the feature embeddings among multiple representations, which facilitates cross-modal network training. Experimental results show that the proposed method obtains substantial improvements on the CUB (FID from 14.81 to 8.57) and COCO (FID from 21.42 to 12.39) datasets.
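The abstract does not provide implementation details; purely as an illustration, the sketch below shows one way an attribute memory could complement a sentence embedding, with the sentence code attending over a learnable memory and the retrieved attribute code joining it as the generator's condition. All module names, dimensions, and the attention form are assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class AttributeMemoryConditioner(nn.Module):
    """Hypothetical sketch: a learnable attribute memory queried by the
    sentence embedding; the retrieved attribute code complements the
    sentence code before it conditions the generator."""

    def __init__(self, sent_dim=256, mem_slots=128, mem_dim=256):
        super().__init__()
        # Generalized attribute memory (one row per attribute slot).
        self.memory = nn.Parameter(torch.randn(mem_slots, mem_dim))
        self.query = nn.Linear(sent_dim, mem_dim)

    def forward(self, sent_emb):                       # (B, sent_dim)
        q = self.query(sent_emb)                       # (B, mem_dim)
        attn = torch.softmax(q @ self.memory.t(), -1)  # (B, mem_slots)
        attr_code = attn @ self.memory                 # (B, mem_dim)
        # Sentence and attribute codes jointly condition the generator.
        return torch.cat([sent_emb, attr_code], dim=-1)

cond = AttributeMemoryConditioner()(torch.randn(4, 256))  # (4, 512)
```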
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support/query features based on a Transformer-like framework. Our key insights are twofold: Firstly, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Secondly, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice from two aspects, i.e., feature-level and instance-level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. When benchmarking results on the COCO dataset for the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and model will be available.
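As a rough, hedged illustration of the first reference step (mask-aided dynamic class centers re-weighting query features), the sketch below uses masked average pooling over support features followed by a channel-wise re-weighting of the query feature map; the function names and the sigmoid gating are assumptions rather than the paper's actual modules.

```python
import torch
import torch.nn.functional as F

def mask_pooled_class_center(support_feat, support_mask):
    """support_feat: (B, C, H, W), support_mask: (B, 1, H', W') in {0, 1}.
    Masked average pooling yields one dynamic class center per image."""
    mask = F.interpolate(support_mask.float(), size=support_feat.shape[-2:])
    num = (support_feat * mask).sum(dim=(2, 3))
    den = mask.sum(dim=(2, 3)).clamp(min=1e-6)
    return num / den                                        # (B, C)

def reweight_query(query_feat, class_center):
    """Channel-wise re-weighting of query features by the class center
    (a simplified stand-in for the mask-based dynamic weighting module)."""
    weight = torch.sigmoid(class_center)[..., None, None]   # (B, C, 1, 1)
    return query_feat * weight

q = torch.randn(2, 256, 32, 32)
s = torch.randn(2, 256, 32, 32)
m = (torch.rand(2, 1, 64, 64) > 0.5)
out = reweight_query(q, mask_pooled_class_center(s, m))    # (2, 256, 32, 32)
```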
Survival analysis, the art of time-to-event modeling, plays an important role in clinical treatment decisions. Recently, continuous-time models built from neural ODEs have been proposed for survival analysis. However, training neural ODEs is slow due to the high computational complexity of neural ODE solvers. Here, we propose an efficient alternative for flexible continuous-time models, called Survival Mixture Density Networks (Survival MDN). Survival MDN applies an invertible positive function to the output of a Mixture Density Network (MDN). While MDNs produce flexible real-valued distributions, the invertible positive function maps the model into the time domain while preserving a tractable density. On four datasets, we show that Survival MDN performs better than, or similarly to, continuous- and discrete-time baselines on concordance index, integrated Brier score, and integrated binomial log-likelihood. Meanwhile, Survival MDN is also faster than ODE-based models and circumvents the binning problem of discrete-time models.
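A minimal sketch of the core construction, assuming a Gaussian mixture as the MDN output and softplus as the invertible positive function (the paper's exact choices may differ): the real-valued mixture density is pushed onto the positive time axis, and the change-of-variables term keeps the density tractable.

```python
import torch
import torch.distributions as D

def mdn_mixture(params):
    """params: (B, 3K) -> mixture of K Gaussians on the real line."""
    logits, mean, log_std = params.chunk(3, dim=-1)
    comp = D.Normal(mean, log_std.exp())
    return D.MixtureSameFamily(D.Categorical(logits=logits), comp)

def survival_mdn_log_density(params, t, eps=1e-8):
    """Tractable density of T = softplus(Z) with Z ~ MDN mixture,
    via change of variables (softplus is an invertible positive map)."""
    z = torch.log(torch.expm1(t) + eps)           # softplus^{-1}(t)
    log_jac = -torch.log1p(-torch.exp(-t) + eps)  # log |dz/dt|
    return mdn_mixture(params).log_prob(z) + log_jac

params = torch.randn(8, 9)      # K = 3 components
t = torch.rand(8) + 0.1         # observed event times (> 0)
nll = -survival_mdn_log_density(params, t).mean()
```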
It has been a common practice to adopt the ResBlock, which learns the difference between blurry and sharp image pairs, in end-to-end image deblurring architectures. Reconstructing a sharp image from its blurry counterpart requires changes in both low-frequency and high-frequency information. Although the conventional ResBlock has a good ability to capture the high-frequency components of an image, it tends to overlook low-frequency information. Moreover, the ResBlock usually fails to felicitously model the long-distance information that is non-trivial for reconstructing a sharp image from its blurry counterpart. In this paper, we present a Residual Fast Fourier Transform with Convolution Block (Res FFT-Conv Block), capable of capturing both long-term and short-term interactions while integrating both low- and high-frequency residuals. The Res FFT-Conv Block is a conceptually simple yet computationally efficient, plug-and-play block that leads to remarkable performance gains in different architectures. With the Res FFT-Conv Block, we further propose a Deep Residual Fourier Transformation (DeepRFT) framework based on MIMO-UNet, achieving state-of-the-art image deblurring performance on the GoPro, HIDE, RealBlur, and DPDD datasets. Experiments show that our DeepRFT can boost image deblurring performance significantly (e.g., a 1.09 dB improvement in PSNR on the GoPro dataset compared with MIMO-UNet), and DeepRFT+ reaches 33.23 dB in PSNR on the GoPro dataset.
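The sketch below illustrates the idea of a Res FFT-Conv block as described above, with a spatial 3x3-convolution branch for local/high-frequency residuals and a frequency-domain 1x1-convolution branch (on the real and imaginary parts of the 2D FFT) for global/low-frequency residuals. It is not the official DeepRFT implementation, so layer choices and normalization are assumptions.

```python
import torch
import torch.nn as nn

class ResFFTConvBlock(nn.Module):
    """Hedged sketch of a residual FFT-convolution block: identity +
    spatial conv branch + frequency-domain conv branch."""

    def __init__(self, channels=64):
        super().__init__()
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Operates on concatenated real/imaginary parts of the spectrum.
        self.freq = nn.Sequential(
            nn.Conv2d(2 * channels, 2 * channels, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(2 * channels, 2 * channels, 1),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        spec = torch.fft.rfft2(x, norm="backward")       # (B, C, H, W//2+1)
        spec = torch.cat([spec.real, spec.imag], dim=1)  # (B, 2C, H, W//2+1)
        spec = self.freq(spec)
        real, imag = spec.chunk(2, dim=1)
        freq_out = torch.fft.irfft2(torch.complex(real, imag), s=(h, w))
        return x + self.spatial(x) + freq_out

y = ResFFTConvBlock()(torch.randn(1, 64, 128, 128))  # same shape as input
```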
Deep models trained through maximum likelihood have achieved state-of-the-art results for survival analysis. Despite this training scheme, practitioners evaluate models under other criteria, such as binary classification losses at a chosen set of time horizons, e.g., the Brier score (BS) and the Bernoulli log-likelihood (BLL). Models trained with maximum likelihood may have poor BS or BLL, since maximum likelihood does not directly optimize these criteria. Directly optimizing criteria such as BS requires inverse-weighting by the censoring distribution, and estimating the censoring distribution itself requires inverse-weighting by the failure distribution; neither is known. To resolve this dilemma, we introduce Inverse-Weighted Survival Games to train both failure and censoring models with respect to criteria such as BS or BLL. In these games, the objective for each model is built from a re-weighted estimate featuring the other model, where the re-weighting model is held fixed during training. When the losses are proper, we show that the games always have the true failure and censoring distributions as a stationary point. This means that models in the game do not leave the correct distributions. We construct one case where this stationary point is unique. We show that these games optimize BS on simulations and then apply these principles to real-world cancer and critically-ill patient data.
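For concreteness, one inverse-weighted criterion of the kind discussed above is the IPCW Brier score at a horizon t (standard form; the notation is mine, not taken from the paper), where S-hat is the failure model's survival function, G-hat the censoring survival estimate, T_i the observed time, and delta_i the event indicator. In the game, the failure model is trained against this objective with G-hat held fixed, and the censoring model is trained with the roles reversed.

```latex
\mathrm{BS}(t) \;=\; \frac{1}{N}\sum_{i=1}^{N}\left[
  \frac{\hat S(t \mid x_i)^2 \, \mathbf{1}\{T_i \le t,\ \delta_i = 1\}}{\hat G(T_i \mid x_i)}
  \;+\;
  \frac{\bigl(1 - \hat S(t \mid x_i)\bigr)^2 \, \mathbf{1}\{T_i > t\}}{\hat G(t \mid x_i)}
\right]
```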
Temporal modeling is key for action recognition in videos. It normally considers both short-range motions and long-range aggregations. In this paper, we propose a Temporal Excitation and Aggregation (TEA) block, including a motion excitation (ME) module and a multiple temporal aggregation (MTA) module, specifically designed to capture both short- and long-range temporal evolution. In particular, for short-range motion modeling, the ME module calculates the feature-level temporal differences from spatiotemporal features. It then utilizes the differences to excite the motion-sensitive channels of the features. The long-range temporal aggregations in previous works are typically achieved by stacking a large number of local temporal convolutions. Each convolution processes a local temporal window at a time. In contrast, the MTA module proposes to deform the local convolution into a group of sub-convolutions, forming a hierarchical residual architecture. Without introducing additional parameters, the features will be processed with a series of sub-convolutions, and each frame could complete multiple temporal aggregations with neighborhoods. The final equivalent receptive field of the temporal dimension is accordingly enlarged, which is capable of modeling the long-range temporal relationship over distant frames. The two components of the TEA block are complementary in temporal modeling. Finally, our approach achieves impressive results at low FLOPs on several action recognition benchmarks, such as Kinetics, Something-Something, HMDB51, and UCF101, which confirms its effectiveness and efficiency.
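As a hedged sketch of the motion-excitation idea only (the paper's ME module differs in details such as channel reduction and placement), the block below computes feature-level differences between adjacent frames, pools them spatially, and uses them to gate motion-sensitive channels.

```python
import torch
import torch.nn as nn

class MotionExcitation(nn.Module):
    """Simplified ME-style module: temporal differences of adjacent
    frame features produce channel-wise excitation weights."""

    def __init__(self, channels=64, reduction=4):
        super().__init__()
        r = channels // reduction
        self.squeeze = nn.Conv2d(channels, r, 1)
        self.expand = nn.Conv2d(r, channels, 1)

    def forward(self, x):                        # x: (B, T, C, H, W)
        b, t, c, h, w = x.shape
        feat = self.squeeze(x.flatten(0, 1)).view(b, t, -1, h, w)
        diff = feat[:, 1:] - feat[:, :-1]        # temporal differences
        diff = torch.cat([diff, diff.new_zeros(b, 1, *diff.shape[2:])], dim=1)
        attn = diff.mean(dim=(3, 4), keepdim=True)           # spatial pooling
        attn = torch.sigmoid(self.expand(attn.flatten(0, 1)))
        return x + x * attn.view(b, t, c, 1, 1)              # excite motion channels

y = MotionExcitation()(torch.randn(2, 8, 64, 56, 56))
```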
A recent study has revealed a phenomenon called neural collapse, in which the within-class means of features and the classifier weight vectors converge to the vertices of a simplex equiangular tight frame at the terminal phase of training for classification. In this paper, we explore the corresponding structures of the last-layer feature centers and classifiers in semantic segmentation. Based on our empirical and theoretical analysis, we point out that semantic segmentation naturally brings contextual correlation and imbalanced distribution among classes, which breaks the equiangular and maximally separated structure of neural collapse for both feature centers and classifiers. However, such a symmetric structure is beneficial to the discrimination of minority classes. To preserve these advantages, we introduce a regularizer on feature centers to encourage the network to learn features closer to this appealing structure in imbalanced semantic segmentation. Experimental results show that our method can bring significant improvements on both 2D and 3D semantic segmentation benchmarks. Moreover, our method ranks 1st and sets a new record (+6.8% mIoU) on the ScanNet200 test leaderboard. Code will be available at https://github.com/dvlab-research/Imbalanced-Learning.
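A hypothetical sketch of such a regularizer, assuming the target is the simplex-ETF geometry in which all pairwise cosine similarities between class centers equal -1/(K-1); this illustrates the idea and is not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def center_regularizer(centers):
    """Push pairwise cosine similarities of class feature centers toward
    the simplex-ETF value -1/(K-1), i.e., an equiangular, maximally
    separated layout (illustrative, not the paper's formulation)."""
    k = centers.shape[0]
    c = F.normalize(centers - centers.mean(dim=0, keepdim=True), dim=1)
    cos = c @ c.t()                                   # (K, K) cosine similarities
    target = torch.full_like(cos, -1.0 / (k - 1))
    off_diag = ~torch.eye(k, dtype=torch.bool, device=cos.device)
    return ((cos - target)[off_diag] ** 2).mean()

loss = center_regularizer(torch.randn(200, 256))      # e.g., 200 classes, 256-d centers
```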
Weakly-supervised object localization aims to indicate the category as well as the scope of an object in an image given only image-level labels. Most existing works are based on Class Activation Mapping (CAM) and endeavor to enlarge the discriminative area inside the activation map to perceive the whole object, yet ignore the co-occurrence confounder of object and context (e.g., fish and water), which makes it hard for the model to distinguish object boundaries. Besides, the use of CAM also brings a dilemma: classification and localization always suffer from a performance gap and cannot reach their highest accuracy simultaneously. In this paper, we propose a causal knowledge distillation method, dubbed KD-CI-CAM, to address these two under-explored issues in one go. More specifically, we tackle the co-occurrence context confounder problem via causal intervention (CI), which explores the causalities among image features, contexts, and categories to eliminate the biased object-context entanglement in the class activation maps. Based on the de-biased object feature, we additionally propose a multi-teacher causal distillation framework to balance the absorption of classification knowledge and localization knowledge during model training. Extensive experiments on several benchmarks demonstrate the effectiveness of KD-CI-CAM in learning clear object boundaries from confounding contexts and in addressing the dilemma between classification and localization performance.
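Purely as an illustration of the multi-teacher distillation idea (the weighting scheme, losses, and tensor shapes below are assumptions, not the paper's formulation), a student could absorb classification knowledge from softened teacher logits and localization knowledge from a teacher's activation maps:

```python
import torch
import torch.nn.functional as F

def multi_teacher_distill_loss(student_logits, student_cam,
                               cls_teacher_logits, loc_teacher_cam,
                               alpha=0.5, tau=2.0):
    """Illustrative sketch only: soft-logit distillation for classification
    knowledge plus CAM matching for localization knowledge, balanced by alpha."""
    kd_cls = F.kl_div(
        F.log_softmax(student_logits / tau, dim=1),
        F.softmax(cls_teacher_logits / tau, dim=1),
        reduction="batchmean",
    ) * tau ** 2
    kd_loc = F.mse_loss(student_cam, loc_teacher_cam)
    return alpha * kd_cls + (1.0 - alpha) * kd_loc

loss = multi_teacher_distill_loss(
    torch.randn(4, 200), torch.rand(4, 200, 14, 14),
    torch.randn(4, 200), torch.rand(4, 200, 14, 14),
)
```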
Witnessing the impressive achievements of pre-training techniques on large-scale data in the fields of computer vision and natural language processing, we wonder whether this idea could be adapted in a grab-and-go spirit to mitigate the sample inefficiency problem of visuomotor driving. Given the highly dynamic and variant nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive information irrelevant to decision making, rendering predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for policy pretraining in visuomotor driving. We aim to learn policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns the driving policy representation by predicting the future ego-motion and optimizing the photometric error based on the current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving-policy-related representations and is thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios have demonstrated the superiority of our proposed approach, where improvements range from 2% to even over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
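The abstract leaves the loss unspecified; the sketch below shows a standard photometric error in the monodepth2 style as a stand-in (an assumption on my part, not necessarily PPGeo's exact formulation), where `warped` denotes the source frame reprojected into the target view using the predicted depth and ego-motion.

```python
import torch
import torch.nn.functional as F

def photometric_loss(target, warped, alpha=0.85):
    """Simplified photometric supervision: blend of an SSIM-based term and
    an L1 term between the target frame and the geometrically warped source."""
    l1 = (target - warped).abs().mean(dim=1, keepdim=True)
    mu_t = F.avg_pool2d(target, 3, 1, 1)
    mu_w = F.avg_pool2d(warped, 3, 1, 1)
    var_t = F.avg_pool2d(target ** 2, 3, 1, 1) - mu_t ** 2
    var_w = F.avg_pool2d(warped ** 2, 3, 1, 1) - mu_w ** 2
    cov = F.avg_pool2d(target * warped, 3, 1, 1) - mu_t * mu_w
    ssim = ((2 * mu_t * mu_w + 1e-4) * (2 * cov + 9e-4)) / \
           ((mu_t ** 2 + mu_w ** 2 + 1e-4) * (var_t + var_w + 9e-4))
    dssim = ((1 - ssim) / 2).clamp(0, 1).mean(dim=1, keepdim=True)
    return (alpha * dssim + (1 - alpha) * l1).mean()

loss = photometric_loss(torch.rand(2, 3, 192, 640), torch.rand(2, 3, 192, 640))
```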
In this work, we focus on instance-level open vocabulary segmentation, intending to expand a segmenter for instance-wise novel categories without mask annotations. We investigate a simple yet effective framework with the help of image captions, focusing on exploiting thousands of object nouns in captions to discover instances of novel classes. Rather than adopting pretrained caption models or using massive caption datasets with complex pipelines, we propose an end-to-end solution from two aspects: caption grounding and caption generation. In particular, we devise a joint Caption Grounding and Generation (CGG) framework based on a Mask Transformer baseline. The framework has a novel grounding loss that performs explicit and implicit multi-modal feature alignments. We further design a lightweight caption generation head to allow for additional caption supervision. We find that grounding and generation complement each other, significantly enhancing the segmentation performance for novel categories. We conduct extensive experiments on the COCO dataset with two settings: Open Vocabulary Instance Segmentation (OVIS) and Open Set Panoptic Segmentation (OSPS). The results demonstrate the superiority of our CGG framework over previous OVIS methods, achieving a large improvement of 6.8% mAP on novel classes without extra caption data. Our method also achieves over 15% PQ improvements for novel classes on the OSPS benchmark under various settings.
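As a rough sketch of a grounding-style objective only (the paper's explicit and implicit alignment terms are not reproduced; the pooling and InfoNCE form are assumptions), object-query embeddings and caption noun embeddings from the same image can be pulled together contrastively over the batch:

```python
import torch
import torch.nn.functional as F

def caption_grounding_loss(query_emb, noun_emb, tau=0.07):
    """Illustrative grounding objective: pooled object queries and pooled
    caption-noun embeddings of the same image are matched with a symmetric
    InfoNCE-style loss over the batch."""
    q = F.normalize(query_emb.mean(dim=1), dim=-1)   # (B, D) pooled object queries
    n = F.normalize(noun_emb.mean(dim=1), dim=-1)    # (B, D) pooled noun embeddings
    logits = q @ n.t() / tau                         # (B, B) image-caption similarities
    labels = torch.arange(q.shape[0], device=q.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

loss = caption_grounding_loss(torch.randn(4, 100, 256), torch.randn(4, 8, 256))
```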